Synergetic use of sensors for soil moisture retrieval is attracting considerable interest due to the complementary advantages of different sensors. Integrating active, passive, and optical data could be a comprehensive solution for exploiting the advantages of different sensors when preparing soil moisture maps. Typically, pixel-based methods are used for multi-sensor fusion. Since different applications need soil moisture maps at different scales, pixel-based approaches are limited for this purpose. Object-based image analysis, which employs image objects instead of pixels, can help meet this need. This paper proposes a segment-based image fusion framework to evaluate the possibility of preparing multi-scale soil moisture maps by integrating Sentinel-1, Sentinel-2, and Soil Moisture Active Passive (SMAP) data. The results confirmed that the proposed methodology improved soil moisture estimation at different scales by up to 20% compared to a pixel-based fusion approach.
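A minimal sketch of the segment-based (object-based) fusion idea, assuming a SLIC segmentation of the optical scene and simple per-object averaging of co-registered radar backscatter and coarse soil moisture rasters; the segmentation parameters and aggregation rule below are illustrative assumptions, not details from the paper.

```python
import numpy as np
from skimage.segmentation import slic

def segment_based_fusion(optical_rgb, sar_backscatter, smap_moisture, n_segments=500):
    """Aggregate co-registered rasters over image objects (segments).

    optical_rgb     : (H, W, 3) Sentinel-2-like optical image used for segmentation
    sar_backscatter : (H, W)    Sentinel-1-like backscatter resampled to the same grid
    smap_moisture   : (H, W)    SMAP-like soil moisture resampled to the same grid
    Returns per-object mean features and the segment label map.
    """
    # Object-based image analysis: image objects (segments) replace pixels
    labels = slic(optical_rgb, n_segments=n_segments, compactness=10)

    feats = []
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        feats.append([
            sar_backscatter[mask].mean(),  # mean radar backscatter in the object
            smap_moisture[mask].mean(),    # mean coarse-scale soil moisture in the object
            optical_rgb[mask].mean(),      # mean optical brightness in the object
        ])
    return np.asarray(feats), labels

# Example with synthetic rasters on a 128 x 128 grid
rgb = np.random.rand(128, 128, 3)
sar = np.random.rand(128, 128)
smap = np.random.rand(128, 128)
features, label_map = segment_based_fusion(rgb, sar, smap)
print(features.shape)  # (n_objects, 3)
```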
The problem of reversing the compilation process, decompilation, is an important tool in reverse engineering of computer software. Recently, researchers have proposed using techniques from neural machine translation to automate the decompilation process. Although such techniques hold the promise of targeting a wider range of source and assembly languages, to date they have primarily targeted C code. In this paper we argue that existing neural decompilers have achieved higher accuracy at the cost of requiring language-specific domain knowledge such as tokenizers and parsers to build an abstract syntax tree (AST) for the source language, which increases the overhead of supporting new languages. We explore a different tradeoff that, to the extent possible, treats the assembly and source languages as plain text, and show that this allows us to build a decompiler that is easily retargetable to new languages. We evaluate our prototype decompiler, Beyond The C (BTC), on Go, Fortran, OCaml, and C, and examine the impact of parameters such as tokenization and training data selection on the quality of decompilation, finding that it achieves decompilation results comparable to prior work in neural decompilation with significantly less domain knowledge. We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
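A hedged sketch of the language-agnostic tokenization idea: instead of building a source-language AST, train a generic subword (BPE) tokenizer directly on raw assembly and source text. The use of the Hugging Face `tokenizers` library, the vocabulary size, and the tiny in-memory corpus are all assumptions for illustration; the paper's actual pipeline may differ.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Treat assembly and source code as plain text: a single shared subword (BPE)
# vocabulary, with no language-specific lexer, parser, or AST construction.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=16000,  # placeholder size
                              special_tokens=["[UNK]", "[PAD]", "[SEP]"])

# Hypothetical in-memory corpus of paired assembly and source snippets
corpus = [
    "movq %rdi, %rax ; addq $1, %rax ; ret",
    "func incr(x int) int { return x + 1 }",
]
tokenizer.train_from_iterator(corpus, trainer)

print(tokenizer.encode("movq %rsi, %rax ; ret").tokens)
```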
Over the years, machine learning models have been successfully employed on neuroimaging data to accurately predict brain age. Deviations from the healthy brain aging pattern are associated with accelerated brain aging and brain abnormalities. Hence, efficient and accurate diagnostic techniques are required to produce accurate brain age estimates. Several contributions have been reported in the past for this purpose, resorting to different data-driven modeling methods. Recently, deep neural networks (also referred to as deep learning) have become prevalent in many neuroimaging studies, including brain age estimation. In this review, we offer a comprehensive analysis of the literature on the adoption of deep learning for brain age estimation with neuroimaging data. We detail and analyze the different deep learning architectures used for this application, paying particular attention to research works published to date that quantitatively explore their application. We also examine different brain age estimation frameworks, comparatively exposing their advantages and weaknesses. Finally, the review concludes with an outlook on future directions that prospective studies should follow. The ultimate goal of this paper is to establish a common and informed reference for newcomers and experienced researchers willing to approach brain age estimation using deep learning models.
Utilizing autonomous drones or unmanned aerial vehicles (UAVs) has shown great advantages over preceding methods in support of urgent scenarios such as search and rescue (SAR) and wildfire detection. In these operations, search efficiency in terms of the time spent to find the target is crucial, since as time passes the survivability of the missing person decreases or wildfire management becomes more difficult, with disastrous consequences. In this work, we consider a scenario in which a drone is intended to search for and detect a missing person (e.g., a hiker or a mountaineer) or a potential fire spot in a given area. In order to obtain the shortest path to the target, a general framework is provided to model the problem of target detection when the target's location is probabilistically known. To this end, two algorithms are proposed: path planning and target detection. The path planning algorithm is based on Bayesian inference, and target detection is accomplished by means of a residual neural network (ResNet) trained on the image dataset captured by the drone as well as existing pictures and datasets on the web. Through simulation and experiment, the proposed path planning algorithm is compared with two benchmark algorithms. It is shown that the proposed algorithm significantly decreases the average time of the mission.
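A minimal sketch of Bayesian search over a grid of cells, under the common assumption of an imperfect detector with a known miss probability; the prior map, detection probability, and greedy next-cell rule below are illustrative and not taken from the paper's planner.

```python
import numpy as np

def bayesian_search(prior, p_detect=0.9, max_steps=50, rng=None):
    """Greedy Bayesian search on a grid.

    prior    : (H, W) probability map of the target's location
    p_detect : probability the sensor detects the target when scanning its cell
    """
    rng = rng or np.random.default_rng(0)
    belief = prior / prior.sum()
    true_cell = np.unravel_index(rng.choice(prior.size, p=belief.ravel()), prior.shape)

    for step in range(1, max_steps + 1):
        cell = np.unravel_index(np.argmax(belief), belief.shape)  # scan most likely cell
        if cell == true_cell and rng.random() < p_detect:
            return step, cell                                     # target found
        # Negative observation: Bayes update of the scanned cell, then renormalize
        belief[cell] *= (1.0 - p_detect)
        belief /= belief.sum()
    return None, None

prior = np.ones((20, 20))
prior[12:16, 5:9] = 10.0          # region believed more likely to contain the target
steps, found_cell = bayesian_search(prior)
print(steps, found_cell)
```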
Opinion summarisation synthesises opinions expressed in a group of documents discussing the same topic to produce a single summary. Recent work has looked at opinion summarisation of clusters of social media posts. Such posts are noisy and have unpredictable structure, posing additional challenges for the construction of the summary distribution and the preservation of meaning compared to online reviews, which have so far been the focus of opinion summarisation. To address these challenges we present WassOS, an unsupervised abstractive summarisation model which makes use of the Wasserstein distance. A variational autoencoder is used to obtain the distributions of documents/posts, and the distributions are disentangled into separate semantic and syntactic spaces. The summary distribution is obtained as the Wasserstein barycenter of the semantic and syntactic distributions. A latent variable sampled from the summary distribution is fed into a GRU decoder with a transformer layer to produce the final summary. Our experiments on multiple datasets including Twitter clusters, Reddit threads, and reviews show that WassOS almost always outperforms the state of the art on ROUGE metrics and consistently produces the best summaries with respect to meaning preservation according to human evaluations.
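A hedged illustration of a Wasserstein barycenter in the simplified case of diagonal Gaussian latents, for which the 2-Wasserstein barycenter has a closed form (weighted average of means and of standard deviations); WassOS's actual latent structure and weighting scheme may differ.

```python
import numpy as np

def gaussian_w2_barycenter(means, stds, weights=None):
    """2-Wasserstein barycenter of diagonal Gaussians N(mean_i, diag(std_i^2)).

    For commuting (diagonal) covariances the barycenter is Gaussian with
    mean = sum_i w_i * mean_i and std = sum_i w_i * std_i.
    """
    means, stds = np.asarray(means), np.asarray(stds)
    w = np.full(len(means), 1.0 / len(means)) if weights is None else np.asarray(weights)
    return w @ means, w @ stds

# Toy example: latent posteriors of three posts in a 4-dimensional latent space
post_means = np.random.randn(3, 4)
post_stds = np.random.rand(3, 4) + 0.1
bary_mean, bary_std = gaussian_w2_barycenter(post_means, post_stds)

# Sample a summary latent from the barycenter distribution (to feed a decoder)
z = bary_mean + bary_std * np.random.randn(4)
print(z)
```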
Recent mean field interpretations of learning dynamics in over-parameterized neural networks offer theoretical insights on the empirical success of first order optimization algorithms in finding global minima of the nonconvex risk landscape. In this paper, we explore applying mean field learning dynamics as a computational algorithm, rather than as an analytical tool. Specifically, we design a Sinkhorn regularized proximal algorithm to approximate the distributional flow from the learning dynamics in the mean field regime over weighted point clouds. In this setting, a contractive fixed point recursion computes the time-varying weights, numerically realizing the interacting Wasserstein gradient flow of the parameter distribution supported over the neuronal ensemble. An appealing aspect of the proposed algorithm is that the measure-valued recursions allow meshless computation. We demonstrate the proposed computational framework of interacting weighted particle evolution on binary and multi-class classification. Our algorithm performs gradient descent of the free energy associated with the risk functional.
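A schematic sketch of the Sinkhorn fixed-point iteration that sits at the core of such entropy-regularized proximal recursions, shown here as standard entropic optimal transport between two weighted point clouds; this is not the paper's full measure-valued recursion, and the cost matrix and regularization level are placeholders.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=200):
    """Standard Sinkhorn fixed-point iteration for entropic optimal transport.

    mu, nu : weights of the source / target point clouds (each sums to 1)
    C      : pairwise cost matrix between the two clouds
    Returns the entropic transport plan.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):           # contractive fixed-point updates
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy weighted point clouds in parameter space
x = np.random.randn(50, 2)
y = x + 0.1 * np.random.randn(50, 2)            # slightly displaced cloud
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
mu = np.full(50, 1 / 50)
nu = np.full(50, 1 / 50)
plan = sinkhorn(mu, nu, C)
print(plan.sum(), plan.shape)          # ~1.0, (50, 50)
```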
Models trained on data obtained from randomized experiments are ideal for making good decisions. However, randomized experiments are often time-consuming, expensive, risky, infeasible, or unethical, and decision-makers have no choice but to rely on observational data collected under historical policies when training models. This raises questions not only about how well decision policies learned from such data will perform in practice, but also about the effect of different data collection protocols on the performance of various policies trained on the data, or about the robustness of policies to problem features such as action- or reward-specific delays in observing outcomes. We aim to answer such questions for the problem of optimizing sales channel allocation at LinkedIn, where sales accounts (leads) need to be allocated to one of three channels with the goal of maximizing the number of successful conversions over a period of time. A key problem feature is the stochastic delay in observing allocation outcomes, whose distribution is both channel- and outcome-dependent. We build a discrete-time simulation that can handle our problem features and use it to evaluate: a) the historical rule-based policy; b) a supervised machine learning policy (XGBoost); and c) multi-armed bandit (MAB) policies, under different scenarios involving: i) data collection for training (observational vs. randomized); ii) lead conversion scenarios; and iii) delay distributions. Our simulation results show that LinUCB, a simple MAB policy, consistently outperforms the other policies, achieving an 18-47% lift relative to the rule-based policy.
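A compact sketch of the standard LinUCB update referred to above, with three arms standing in for the three sales channels; the feature construction, exploration constant, and synthetic reward model are illustrative assumptions rather than details from the paper.

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB: one ridge-regression model per arm plus a UCB bonus."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T r per arm

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # Point estimate plus upper-confidence bonus
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage: 3 channels, 5-dimensional lead features, synthetic conversion rewards
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=3, dim=5)
true_theta = rng.normal(size=(3, 5))
for _ in range(1000):
    x = rng.normal(size=5)
    arm = bandit.select(x)
    reward = float(rng.random() < 1 / (1 + np.exp(-true_theta[arm] @ x)))  # Bernoulli conversion
    bandit.update(arm, x, reward)
```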
Assistive robotics is a class of robotics concerned with helping humans with everyday care tasks that they may be unable to perform due to disability or age. Although research has shown that classical control methods can be used to design policies for completing these tasks, these methods can be difficult to generalize to diverse instantiations of a task. Reinforcement learning can offer a solution to this problem, in which robots are trained in simulation and their policies are transferred to the real world. In this work, we replicate publicly available baselines for training robots on three tasks in the Assistive Gym environment, and explore the use of recurrent neural networks and phasic policy gradient learning to augment the original work. Our baseline implementation matches or exceeds the baselines of the original work; however, we find that our exploration of the new methods was not as effective as we expected. We discuss our baseline results, as well as some thoughts on why our new methods were not successful.
We formulate the optimal control of colloidal self-assembly as a finite-horizon stochastic optimal control problem in the space of probability density functions (PDFs) of the underlying state variables (i.e., order parameters). The control objective is formulated in terms of steering the state PDF from a prescribed initial probability measure to a prescribed terminal probability measure with minimum control effort. For specificity, we use a univariate stochastic state model from the literature. Both the analysis and the computational steps for control synthesis developed in this paper generalize to multivariate stochastic state dynamics that are generically nonlinear in the state and non-affine in the control. We derive the conditions of optimality for the associated optimal control problem. This derivation yields a system of three coupled partial differential equations together with boundary conditions at the initial and terminal times. The resulting system is a generalized instance of the so-called Schrödinger bridge problem. We then determine the optimal control policy by training a physics-informed deep neural network, where the "physics" are the derived conditions of optimality. The performance of the proposed solution is demonstrated via numerical simulation on a benchmark colloidal self-assembly problem.
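A hedged sketch of a physics-informed neural network, in which PDE residuals enter the training loss; the scalar PDE used below (a simple heat-equation residual with a Gaussian initial condition) is a stand-in for the paper's coupled optimality system, and the network size and collocation sampling are arbitrary choices.

```python
import torch
import torch.nn as nn

# Small network approximating a scalar function u(t, x)
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def grad(y, x):
    # Derivative of y with respect to x, keeping the graph for higher-order terms
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

for step in range(2000):
    # Collocation points in the space-time domain
    t = torch.rand(256, 1, requires_grad=True)
    x = torch.rand(256, 1, requires_grad=True) * 2 - 1
    u = net(torch.cat([t, x], dim=1))

    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)
    pde_residual = u_t - 0.1 * u_xx        # placeholder "physics": u_t = 0.1 u_xx

    # Initial-time condition playing the role of a prescribed boundary measure
    x0 = torch.rand(64, 1) * 2 - 1
    u0 = net(torch.cat([torch.zeros_like(x0), x0], dim=1))
    ic_residual = u0 - torch.exp(-10 * x0 ** 2)

    loss = (pde_residual ** 2).mean() + (ic_residual ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```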
Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, pro-environmental action, political engagement, and even participation in violent protests. Various computational methods in natural language processing (NLP) have been used to detect moral sentiment from textual data, but achieving strong performance in such subjective tasks requires large amounts of hand-annotated training data. Previous corpora annotated for moral sentiment have proven valuable and have produced new insights both in NLP and across the social sciences, but have been limited to Twitter. To facilitate our understanding of the role of moral rhetoric, we present the Moral Foundations Reddit Corpus, a collection of 16,123 Reddit comments curated from 12 distinct subreddits, hand-annotated by at least three trained annotators for 8 categories of moral sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. We use a range of methods, e.g., cross-domain classification and knowledge transfer, to provide baseline moral sentiment classification results for this new corpus.
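A simple illustration of the kind of multi-label baseline such a corpus enables (TF-IDF features with one-vs-rest logistic regression); the example comments and label assignments are invented, and the paper's actual baselines (cross-domain classification, knowledge transfer) use different models.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Invented toy comments and multi-hot labels over three of the eight categories
texts = [
    "We should take care of people who are struggling.",
    "Everyone deserves an equal say in this decision.",
    "Respect the moderators and follow the subreddit rules.",
    "This has nothing to do with morality at all.",
]
# Columns: [Care, Equality, Authority]
labels = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, labels)
print(clf.predict(["Please be kind and look after newcomers."]))
```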